Against blockchains
This article argues that blockchains are a bad technology and should not be used for any serious application. I give a general overview of the tasks blockchains are designed to fulfill, and then point out why the very design goals of blockchains make for a bad technology, regardless of implementation details. In particular, I will focus on their use in crypto-currencies.
Blockchains, in their currently known form as a distributed ledger, were invented by Satoshi Nakamoto, the inventor of Bitcoin. Their tasks are as follows:
- Emulate an append-only distributed ledger. In crypto-currencies, this ledger contains all the transactions that have been issued, as well as checksums for quickly accessing the system state in a verifiable way at regular checkpoints.
- Create verifiable consensus on the current state of the ledger. This allows everyone to verify what the current state of the ledger is, and in the case of crypto-currencies, to check the balances and transaction histories of all accounts.
- Allow retroactive verification/retracing of all events recorded in the ledger since the beginning of the system. This allows parties who join the system to retroactively verify everything that happened before, ensuring that the system is tamper-free.
- Allow anyone to append entries to the ledger, but only if they are valid. This makes the system censorship-resistant and free. No central operator can prevent any party from transacting.
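The four tasks above can be condensed into a toy model: each ledger entry commits to the hash of the previous entry, appends are validated before acceptance, and the whole history can be replayed to detect tampering. This is a minimal sketch for illustration only; the class, field names, and validation hook are my own and do not correspond to any real blockchain implementation.

```python
import hashlib
import json

class Ledger:
    """A minimal append-only ledger: each entry commits to the previous
    entry's hash, so history cannot be rewritten without detection."""

    GENESIS = "0" * 64  # placeholder hash before any entry exists

    def __init__(self):
        self.entries = []           # the full, ever-growing history
        self.prev_hash = self.GENESIS

    def append(self, entry, is_valid):
        # Anyone may append, but only entries that pass validation.
        if not is_valid(entry):
            return False
        record = {"entry": entry, "prev": self.prev_hash}
        self.prev_hash = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.entries.append(record)
        return True

    def verify(self):
        # Retroactive verification: replay the entire history and check
        # that every entry still commits to its predecessor.
        h = self.GENESIS
        for record in self.entries:
            if record["prev"] != h:
                return False
            h = hashlib.sha256(
                json.dumps(record, sort_keys=True).encode()).hexdigest()
        return True
```

Note that `verify` has to walk the full history from the genesis entry; this is exactly the property whose cost is examined below.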
Retroactive verifiability and probabilistic finality harm decentralisation
To fulfill the retroactive verification property, the whole history of the blockchain has to be stored forever. In a crypto-currency scenario with a transaction rate of 100 kHz (100,000 transactions per second) and a transaction size of 200 bytes, we have a yearly growth of

200 B × 100,000 Hz × 60 s × 60 × 24 × 365 ≈ 573.64 TiB
Thus, every year, more than half a Pebibyte of transaction data is generated, all of which needs to be stored forever. In a century, that would amount to about 56 Pebibytes, which would have to be stored for verification and retracing of the system's history.

It might seem as if not every participant of the system has to store the full history of the blockchain. However, due to the eventual-finality property of Nakamoto blockchains, forks of any length, reaching back up to the genesis block, can occur – albeit with exponentially small probability under the honest-majority-of-mining-power assumption. To process a fork, a node needs to replay and verify all state changes from the shared ancestor block of both chains, which means that the state needs to be reproducible at any given block in a somewhat timely manner. Any cut-off chosen for history pruning is therefore only a bet against probabilities, and it can fail. If it ever fails, the system cannot recover: forks without a common known ancestor cannot synchronise with each other, and the network would be permanently and irrecoverably split.

Additionally, as time progresses, the current system state shrinks relative to the system's ever-growing history, so an ever larger share of the system's resources is wasted on storing the past. The ever-growing hardware requirements also make it harder and harder for volunteers to operate a node of their own, counteracting the goal of widespread decentralisation.
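The growth figures can be checked directly from the assumed transaction rate and size:

```python
# Reproduce the article's storage-growth estimate: 200-byte transactions
# at 100 kHz, accumulated forever (figures as assumed in the text).
TX_SIZE = 200            # bytes per transaction
TX_RATE = 100_000        # transactions per second (100 kHz)
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

yearly_bytes = TX_SIZE * TX_RATE * SECONDS_PER_YEAR
yearly_tib = yearly_bytes / 2**40     # TiB per year
century_pib = yearly_bytes * 100 / 2**50  # PiB per century

print(f"{yearly_tib:.2f} TiB per year")     # 573.64 TiB per year
print(f"{century_pib:.2f} PiB per century") # 56.02 PiB per century
```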
Thus, decentralisation is directly counteracted by blockchains' strong verifiability and retracing properties, and this effect becomes more and more pronounced as time passes.
There have been multiple attempts at working around the infinite history growth – such as hard-coded, periodic checkpoints – but these measures effectively create a completely new genesis block, and clients that are not on the same fork at the time of the checkpoint cannot agree on the new genesis block.
Against Proof-of-Work and similar consensus algorithms
Proof of Work (PoW) and similar algorithms, such as Proof of Space (PoSpace) and Proof of Capacity, all share the following properties:
- Anyone can compete for the right to append a block to the blockchain by expending/reserving resources, such as computation power or digital storage space.
- Many candidates can propose blocks concurrently, leading to conflicting forks in the blockchain, and only over time does one of the many competing histories become accepted by the majority of miners.
- If a fork with more accumulated mining power than a node's currently selected fork appears, that node will switch to the stronger fork.
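The fork-switching rule in the last bullet can be sketched as follows. This is an illustrative model only: the block and fork structures are my own simplification, not any real client's data model.

```python
# Sketch of the heaviest-chain rule: a node tracks the fork with the most
# accumulated work and switches whenever a heavier competing fork appears.

def accumulated_work(fork):
    # Each block carries a 'work' value, e.g. derived from its difficulty.
    return sum(block["work"] for block in fork)

def choose_fork(current, candidate):
    # Switch only if the candidate fork has strictly more total work;
    # otherwise stay on the currently selected fork.
    if accumulated_work(candidate) > accumulated_work(current):
        return candidate
    return current
```

Because any sufficiently heavy fork wins, a node can in principle be forced to reorganise arbitrarily far back – which is why the full history must remain replayable, as argued in the previous section.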
A major part of the network's bandwidth is spent on broadcasting competing blocks, and a major part of the nodes' resources is spent on competing for the right to create a new block. As throughput is very limited and is the greatest bottleneck of blockchain systems, a consensus algorithm that takes up a lot of bandwidth lowers the amount of bandwidth available for issuing new transactions.

For storage-intensive systems, a PoSpace-style consensus algorithm directly limits the amount of system storage a node can contribute while also mining new blocks. Any node that cannot hold the whole system state in addition to its mining storage has to either spend more money on hardware or reduce its mining power. For computation-intensive systems, a PoW-style consensus algorithm directly limits the amount of useful computation a node can perform while also mining new blocks, such as transaction validation and state-transition calculations.

Thus, not only do these kinds of algorithms reduce the usable bandwidth, they also limit the system-usable resources of each node. As the system grows to require more resources and higher capabilities of nodes, it becomes less decentralised, as running a node becomes less and less economically viable for volunteers.
Against Proof-of-Stake and similar consensus algorithms
Proof of Stake (PoS), Proof of Authority (PoA), etc. commonly share the following properties:
- A small static or dynamic consortium of nodes authorise new blocks via voting, leading to quick consensus due to the low number of parties.
- After voting, members are forbidden from voting again for a different block, resulting in near-instant quasi-finality.
- If a fork does appear, the parties who voted twice are thrown out of the consortium, and some protocols additionally punish the offenders financially.
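The double-vote detection underlying the last bullet can be sketched as follows. The vote format and function name are hypothetical; real protocols use signed messages so that a pair of conflicting votes is itself a transferable proof of misbehaviour.

```python
# Sketch of equivocation detection in a consortium-vote protocol: a vote
# binds a validator to one block per height, and two conflicting votes at
# the same height mark the validator as an offender to be ejected (and,
# in some protocols, fined).

def find_equivocations(votes):
    """votes: iterable of (validator, height, block_hash) tuples.
    Returns the set of validators that voted for two different blocks
    at the same height."""
    seen = {}        # (validator, height) -> first block_hash seen
    offenders = set()
    for validator, height, block_hash in votes:
        key = (validator, height)
        if key in seen and seen[key] != block_hash:
            offenders.add(validator)
        seen.setdefault(key, block_hash)
    return offenders
```

Note that this detection only works if the conflicting votes actually reach an honest party – the core of the objection raised below.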
A static consortium is generally dangerous to use: all clients have to put permanent trust in a consortium picked by the developers, and while the developers might trust that consortium, there is no reason for any user to do so. In the crypto-currency space, the developers are not inherently trusted by the users; instead, the code is made public, and that is what creates trust in the system, as users can verify what exactly the software does. It is therefore bad practice to force users to trust any specific group of people as a requirement for using the system.
Consortium-vote-based blockchains usually rely on the honest supermajority (2/3) assumption to work. For dynamic consortia (especially randomly sampled ones), it is not evident that this assumption holds: each new consortium choice can result in a malicious consortium, and given enough samples, eventually one will be. A further problem is that forks have to be reported by the next consortium in order to punish the offenders, but a node has no way of knowing whether a presented chain is honest, because it cannot tell whether a proof of misbehaviour was omitted from the chain it has seen. Any valid chain can be completely invalidated by a fork containing a proof of misbehaviour for one of the past consortia, and this proof can even be submitted retroactively by the offending consortium itself. Thus, it is not evident that these kinds of blockchains ever achieve finality at all, either.